1. Essential storage technologies
One of the few constants in Microsoft Windows operating system administration is that data
storage needs are ever increasing. It seems that only a few years ago a
1-terabyte (TB) hard disk was huge and something primarily reserved for
Windows servers rather than Windows workstations. Now Windows
workstations ship with large hard disks as standard equipment, and some
even ship with striped or spanned drives that support multi-terabyte
volumes. All of that data must be
backed up and stored somewhere other than on the workstations to protect
it. This has meant that back-end storage solutions have had to scale
dramatically as well. Server
solutions that were once used for enterprise-wide implementations are
now being used increasingly at the departmental level, and the
underlying architecture for the related storage solutions has had to
change dramatically to keep up.
Using internal and external storage devices
To help meet the increasing demand for data storage and changing
requirements, organizations are deploying servers with a mix of internal
and external
storage. In internal-storage configurations, drives are connected inside
the server chassis to a local disk controller and are said to be
directly attached. You’ll sometimes see an internal storage device referred to as direct-attached storage (DAS).
In external-storage configurations, servers connect to external,
separately managed collections of storage devices that are either network-attached or part of a storage area network. Although the terms network-attached storage (NAS) and storage area network (SAN)
are sometimes used as if they are one and the same, the technologies
differ in how servers communicate with the external drives.
NAS devices are connected through a regular Transmission Control Protocol/Internet Protocol (TCP/IP) network. All server-storage communications go across the organization’s local area network (LAN), as shown in Figure 1, and typically use file-based protocols for communications, which can include Server Message Block (SMB), Distributed File System (DFS), and Network File System (NFS). This means the available bandwidth on the network can be shared by clients, servers, and NAS devices.
For best performance, the network should be running at 1 gigabit per
second (Gbps) or higher. Networks operating at slower speeds can
experience a serious decrease in performance as clients, servers, and storage devices try to communicate using the limited bandwidth.
A SAN typically is physically separate from the LAN and is independently managed. As shown in Figure 2, this isolates the server-to-storage communications so that traffic doesn’t affect communications between clients and servers. Several SAN technologies are implemented, including Fibre Channel Protocol (FCP), a more traditional SAN technology that delivers high reliability and performance, and Internet SCSI (iSCSI),
which delivers good reliability and performance at a lower cost than
Fibre Channel. As the name implies, iSCSI uses TCP/IP networking
technologies on the SAN so that servers can communicate with storage devices using IP. The SAN is still isolated from the organization’s LAN.
You should be aware that iSCSI uses traditional IP facilities to
transfer data over LANs, wide area networks (WANs), or the Internet.
Here, iSCSI clients (initiators) send Small Computer System Interface (SCSI) commands to targeted iSCSI storage devices (targets) on remote servers. iSCSI consolidates storage and allows hosts—which can include web, application, and database servers—to access the storage as if it were locally attached. Initiators can locate storage resources using Internet Storage Name Service (iSNS). iSNS isn’t required for communications, but it does provide management services similar to those for Fibre Channel networks. iSNS emulates the fabric services of Fibre Channel and can manage both Fibre Channel and iSCSI devices.
Although Fibre Channel requires special cabling, iSCSI uses standard Ethernet cabling and technically can operate over the same network as standard IP traffic. However, if iSCSI isn’t operated on a dedicated network or subnet, performance can be severely degraded.
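As a sketch of how an initiator reaches a target, the following commands from the iSCSI PowerShell module (covered later in this chapter) register a target portal and then connect to a discovered target. The portal address and target IQN shown are placeholders for illustration:

```powershell
# Register the iSCSI target portal (placeholder address) so the
# initiator can discover the targets that portal exposes.
New-IscsiTargetPortal -TargetPortalAddress "192.168.10.50"

# List the targets discovered through the registered portal.
Get-IscsiTarget

# Connect to a discovered target by its IQN (placeholder shown) and
# make the connection persist across reboots.
Connect-IscsiTarget -NodeAddress "iqn.1991-05.com.microsoft:fs01-target" -IsPersistent $true
```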
With TCP/IP, TCP is the transport protocol for IP networks. With Fibre Channel, FCP
is a transport protocol used to transport SCSI commands over the Fibre
Channel network. Fibre Channel networks can use a variety of topologies,
including the following:
- Point-to-point (FC-PTP), where two devices are connected directly.
- Arbitrated loop (FC-AL), where all devices are in a ring, similar to token ring networking.
- Switched fabric (FC-SW), where all devices or device rings are connected to switches, similar to Ethernet.
The standard model for Fibre Channel has five layers:
- FC0, the physical layer, which includes cables and connectors
- FC1, the data-link layer
- FC2, the network layer
- FC3, the common services layer
- FC4, the protocol-mapping layer
Windows Server 2012 includes support for Fibre Channel over Ethernet (FCoE),
a technology that allows IP network and SAN data traffic to be
consolidated on a single network. FCoE encapsulates Fibre Channel frames
over Ethernet and supports 10-Gbps and higher networks. With FCoE, the
FC0 and FC1 layers of the Fibre Channel model are replaced with Ethernet
and FCoE operates in the FC2, or network, layer. This is different from
iSCSI, which runs on top of TCP and IP. Additionally, while iSCSI is
routable across IP networks, FCoE isn’t routable in the IP layer and
won’t work across routed IP networks.
You should also note that although Fibre Channel has priority-based
flow controls, these controls aren’t part of standard Ethernet. As a
result, both FCoE and iSCSI needed enhancements to support
priority-based flow controls and prevent the frame loss that might occur
otherwise. These enhancements, provided in the Data
Center Bridging suite of Institute of Electrical and Electronics
Engineers (IEEE) standards, include the encapsulation of native frames,
extensions to Ethernet to prevent frame loss, and mapping between
ports/IDs and Ethernet media access control (MAC) addresses.
Several competing network protocols are available to provide fabric functionality to Fibre Channel devices over an IP network and to make the technology work over long distances. One is called Internet Fibre Channel Protocol (iFCP).
iFCP uses gateways and routing to enable connectivity and TCP for error
detection and correction as well as congestion control. A similar
technology called Fibre Channel over IP (FCIP) also is available. FCIP
uses storage tunneling, where Fibre Channel frames are encapsulated and then forwarded over an IP network using TCP.
Windows Server 2012 includes many features for working with SANs and handling storage management in general. Volume Shadow Copy Service (VSS) allows administrators to create point-in-time copies of volumes and individual files called snapshots.
This makes it possible to back up these items while files are open and
applications are running and to restore them to a specific point in
time. You also can use VSS to create point-in-time copies of documents
on shared folders. These copies are called shadow copies.
Note
Users can recover their own files when VSS is enabled. After you
configure shadow copy, point-in-time backups of documents contained in
the designated shared folders are created automatically, and users can
quickly recover files that have been deleted or unintentionally altered
as long as the Shadow Copy Client has been installed on their computer.
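For example, on a server where shadow copies are enabled, you can list and create point-in-time copies with the Vssadmin utility (covered later in this chapter). Run the following from an elevated prompt:

```powershell
# List the existing shadow copies on the server.
vssadmin list shadows

# Create a new shadow copy of the C: volume (the create shadow
# subcommand is available on server editions of Windows).
vssadmin create shadow /for=C:
```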
The basic VSS functionality is built into the file and storage services and accessed through the File Server VSS provider. You can extend the basic functions in several ways. One of these ways is to add the File
Server VSS Agent Service. You use this role service to create
consistent snapshots of server application data, such as virtual machine
files from Hyper-V. You install the agent service on a file server when
you want to back up applications that are storing data files on the
file server. Here, you are backing
up application data stored on file shares, which is different from user
data stored on file shares (which is managed using the standard File
Server VSS provider).
Windows Server 2012 also includes storage
providers. Storage providers make it possible for storage devices from
multiple vendors to interoperate. To do this, Microsoft provides Storage Management application programming interfaces (APIs) that management tools
and storage hardware can use, allowing for a unified interface for
managing storage devices from multiple vendors and making it easier for
administrators to manage a mixed-storage environment. Standard storage
providers are built into the file and storage services.
Windows Server 2012 also supports the Storage Management Initiative Specification (SMI-S) and storage providers that are compliant with this standard. You add this support by installing the Windows
Standards-Based Storage Management feature. This feature enables the
discovery, management, and monitoring of storage devices using
management tools that support the SMI-S standard. It does this by
installing related Windows Management Instrumentation (WMI) classes and
cmdlets.
When your file servers are using iSCSI, Fibre Channel, or both
types of storage devices, you might also want to install Multipath I/O, the iSNS
Server service, and Data Center Bridging, all of which are installable features.
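You can add these features with Server Manager or from an elevated PowerShell prompt. As a sketch, the feature names below (Multipath-IO, ISNS, Data-Center-Bridging) are the names I'd expect Get-WindowsFeature to report on Windows Server 2012; confirm them on your system before scripting against them:

```powershell
# Install Multipath I/O, the iSNS Server service, and Data Center
# Bridging in a single operation.
Install-WindowsFeature -Name Multipath-IO, ISNS, Data-Center-Bridging

# Verify the install state of these storage-related features.
Get-WindowsFeature -Name Multipath-IO, ISNS, Data-Center-Bridging
```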
Multipath I/O supports SAN connectivity by establishing multiple sessions or connections to storage devices. Using Multipath I/O, you can configure as many as 32 separate physical paths
to external storage devices that can be used simultaneously and load
balanced if necessary. The purpose of having multiple paths is to have
redundancy and possibly increased throughput. If you have multiple host
bus adapters as well, you improve the chances of recovery from a path
failure. However, if a path failure occurs, there might be a short
period of time when the drives on the SAN aren’t accessible. Microsoft
Multipath I/O (MPIO) supports iSCSI, Fibre Channel, and Serial Attached SCSI (SAS).
iSNS Server service helps iSNS clients discover iSCSI storage
devices on an Ethernet network and also automates the management and
configuration of iSCSI and Fibre Channel storage devices (as long as
Fibre Channel devices use iFCP gateways). Data
Center Bridging helps manage bandwidth allocation for offloaded storage
traffic on converged network adapters, which is useful with iSCSI and
FCoE.
Other file and storage features you might want to install on file servers include the following:
- Enhanced Storage: Supports additional functions made available by devices that support hardware encryption and enhanced storage. Enhanced storage devices support IEEE standard 1667 to provide enhanced security, which can include authentication at the hardware level of the storage device.
- Windows Search Service: Allows for faster file searches for resources on the server from clients that are compatible with this service. Keep in mind, however, this feature is designed primarily for desktop and small office implementations (and not for large enterprises).
- Windows Server Backup: The standard backup utility included with Windows Server 2012.
Server Manager is your primary tool for managing storage. Windows Server 2012 also has several command-line tools for managing local storage and storage-replication services. These tools include the following:
- DiskPart: Used to manage basic and dynamic disks as well as the partitions and volumes on those disks. It is the command-line counterpart to the Disk Management tool and also includes features not found in the graphical user interface (GUI) tool, such as the capability to extend partitions on basic disks.
Note
DiskPart cannot be used to manage Storage Spaces. Windows 8 and
Windows Server 2012 might be the last versions of Windows to support
Disk Management, DiskPart, and DiskRaid. The Virtual Disk Service (VDS)
COM interface is being superseded by the Storage Management API. You can continue to use Disk Management and DiskPart to manage basic and dynamic disks.
- Dfsdiag: Used to perform troubleshooting and diagnostics for DFS.
- Dfsradmin: Used to manage and monitor DFS replication throughout the enterprise. You'll use this tool for troubleshooting and diagnosing problems as well. This tool replaces Health_Chk and the other tools it worked with.
- Dfsutil: Used to configure DFS, back up and restore DFS directory trees (namespaces), copy directory trees, and troubleshoot DFS.
- Fsutil: Used to get detailed drive information and perform advanced file system maintenance. You can manage sparse files, reparse points, disk quotas, and other advanced features of NTFS.
- Mountvol: Used to manage volume automounting. By using volume mount points, administrators can mount volumes to empty NTFS folders, giving the volumes a drive path rather than a drive letter. This makes it easier to mount and unmount volumes, particularly with SANs.
- Vssadmin: Used to view and manage the Volume Shadow Copy Service and its configuration.
Many Windows PowerShell cmdlets are available for managing storage as well. These cmdlets are module-specific and correspond to the storage component you want to manage. Available modules include
- BitsTransfer: Used to manage the Background Intelligent Transfer Service (BITS).
- BranchCache: Used to configure and check the status of Windows BranchCache.
- DFSN: Used to manage DFS Namespaces.
- FileServerResourceManager: Used to manage File Server Resource Manager.
- iSCSI: Used to manage iSCSI connections, sessions, targets, and ports.
- IscsiTarget: Used to mount and manage iSCSI virtual disks.
- SmbShare: Used to configure and check the status of standard file sharing.
- Storage: Used to manage disks, partitions, and volumes, as well as storage pools and Storage Spaces. It cannot be used to manage dynamic disks.
The easiest way to learn
more about these PowerShell modules is to import a particular module,
determine which cmdlets are associated with it, and then examine how the cmdlets are used. You import a module using the following syntax:
Import-Module ModuleName
Here, ModuleName is the name of the module to import, such as the following:
Import-Module iscsi
You list the cmdlets associated with an imported module using
Get-Command -Module ModuleName
Here, ModuleName is the name of the module you want to examine, such as the following:
Get-Command -Module iscsi
After you list the cmdlets associated with an imported module, you can get more information about a particular cmdlet using
Get-Help CmdletName -Detailed
Here, CmdletName is the name of the cmdlet to examine in detail, such as the following:
Get-Help Connect-IscsiTarget -Detailed
Storage-management role services
You use File And Storage Services to configure your file servers. Several file and storage services are installed by default with any installation of Windows Server 2012. These include File Server, which you use to manage file shares that users can access over the network, and Storage Services, which you use to manage various types of storage, including storage
pools and storage spaces. Storage pools group disks so that you can
create virtual disks from the available capacity. Each virtual disk you
create is a storage space.
Windows Server 2012 also supports thin provisioning of your storage spaces. With thin provisioning,
you can create large virtual disks without having the actual space
available. This allows you to provision storage to meet future needs and
grow storage as needed. You also can reclaim storage that is no longer
needed by trimming storage. To see how thin provisioning works, consider the following scenarios:
- Your file server is connected to a storage array with 2 TB of actual storage, but with the capability to grow to 10 TB as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage was already available. One way to do this is to create a storage pool that has a total size of 10 TB and then create five thin disks with 2 TB of storage each.
- Your eight file servers are connected to a SAN with 10 TB of actual storage, but with the capability to grow to 80 TB as needed (by installing additional hard disks). When you set up storage, you provision it as if additional storage was already available. One way to do this is to create a storage pool on each file server that has a total size of 10 TB. Next, within each storage pool, you create five thin disks with 2 TB of storage each.
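The first scenario can be sketched with the Storage module cmdlets. The pool and disk names below are placeholders, and the commands assume the server has physical disks that are eligible for pooling:

```powershell
# Gather the physical disks that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks; the subsystem friendly name
# varies by system, so a wildcard is used here.
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Create one 2-TB thin-provisioned virtual disk; repeat for the others.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "ThinDisk1" -Size 2TB -ProvisioningType Thin -ResiliencySettingName Simple
```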
With thin-disk provisioning, volumes use space from the storage pool as needed, up to the volume size. Here, the actual storage
utilization for a volume is based on the total size of the data stored
on the volume. If a volume doesn’t grow, the storage space is never
allocated and isn’t wasted.
Contrast this to fixed-disk
provisioning, where a volume has a fixed size and uses space from the
storage pool equal to its volume size. Here, the storage utilization for
a volume is fixed and based on the total size of the volume itself.
Because the storage is pre-allocated with a fixed size, any unused space
isn’t available for other volumes.
You can enhance file storage in many ways using the additional role
services that are available for File And Storage Services. One of the
first role services you might consider using is BranchCache For Network
Files. You add the BranchCache For Network Files role service to enable enhanced support for Windows BranchCache on your file servers and to optimize data transfer over the WAN for caching.
Windows BranchCache is a file-caching feature that works in conjunction with BITS. By enabling branch
caching in Group Policy, you allow computers to retrieve documents and
other types of files from a local cache rather than retrieving files
from servers over the network. This improves response times and reduces
transfer times.
Branch caching
can be used in either a distributed cache mode or a hosted cache mode.
With the distributed cache mode, desktop computers running compatible
versions of Windows host and send distributed file caches, and caching
servers running at remote offices are not needed. With the hosted cache
mode, compatible file servers at remote offices host local file caches
and send them to clients. Generally, whether distributed or hosted, the
caches at one office location are separate from caches at other office
locations. That said, the Active Directory configuration and the way
Group Policy is applied ultimately determine whether computers are
considered to be part of one office location or another.
Branch caching is designed as a WAN solution. It optimizes bandwidth
usage for files transferred with either SMB or Hypertext Transfer
Protocol (HTTP). Your content servers can be located anywhere on your
network, as well as in public or private cloud datacenters. You enable
branch caching on web servers and BITS-based application servers by
adding the BranchCache feature. If you are deploying hosted cache
servers, you add the BranchCache feature to these servers as well. You
don’t install this feature on your file servers, however. Instead, you
add the BranchCache For Network Files role service.
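As a sketch, the commands below enable branch caching on the relevant servers; which commands apply depends on the server's role, and the feature names are the ones I'd expect Get-WindowsFeature to report:

```powershell
# On a web server or BITS-based application server: add the
# BranchCache feature.
Install-WindowsFeature -Name BranchCache

# On a hosted cache server at a remote office: enable hosted cache mode.
Enable-BCHostedServer

# On a file server: add the BranchCache For Network Files role service
# instead of the BranchCache feature.
Install-WindowsFeature -Name FS-BranchCache
```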
The Data Deduplication service can be installed with or without the BranchCache For Network Files role service. Data Deduplication uses subfile, variable-size chunking and compression to achieve higher storage efficiency.
The service does this by segmenting files into 32-KB to 128-KB chunks,
identifying duplicate chunks, and replacing the duplicates with
references to a single copy. Because optimized files are stored as
reparse points, files on the volume are no longer stored as data
streams. Instead, they are replaced with stubs that point to data blocks
within a common chunk store.
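To try this, you can install the role service and enable deduplication on a data volume from an elevated PowerShell prompt. The drive letter and file-age threshold here are illustrative:

```powershell
# Install the Data Deduplication role service.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable deduplication on the E: volume (placeholder).
Enable-DedupVolume -Volume "E:"

# Only optimize files that haven't changed in 30 days (illustrative).
Set-DedupVolume -Volume "E:" -MinimumFileAgeDays 30

# Start an optimization job now rather than waiting for the schedule.
Start-DedupJob -Volume "E:" -Type Optimization
```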
Previously, I mentioned the File Server VSS Agent Service, which you install on file servers when you want to ensure that you can make consistent backups
of server application data using VSS-aware backup applications. When
working with iSCSI, you also must install the iSCSI target VSS hardware
provider on the initiator server you use to perform backups of iSCSI
virtual disks. This ensures that the snapshots are
application-consistent and can be restored at the logical unit number
(LUN) level. If you don’t use the iSCSI target VSS hardware provider on
the initiator, server backups might not be consistent and you might not
be able to completely recover your iSCSI virtual disks. On management
computers running storage-management applications, you must install the
iSCSI target Virtual Disk Service (VDS) hardware provider. The iSCSI
target VSS hardware provider and the iSCSI target VDS hardware provider
are part of the iSCSI Target Storage Provider role service.
Another role service you might want to use with iSCSI is the iSCSI Target Server service. This role service turns any computer running Windows Server into a network-accessible block storage device. You can use this continuously available block storage
to support network/diskless boot, shared storage on non-Windows iSCSI
initiators, and development environments where you need to test
applications prior to deploying them to SAN storage. Because the service
uses standard Ethernet for its transport, no additional hardware is
needed.
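A minimal sketch of exposing block storage this way, assuming the iSCSI Target Server role service is already installed; the paths, names, and initiator IQN are placeholders:

```powershell
# Create a virtual disk file to serve as the backing store for a LUN.
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\LUN1.vhd" -Size 100GB

# Create a target and restrict access to a specific initiator
# (placeholder IQN).
New-IscsiServerTarget -TargetName "Target1" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:appsrv01"

# Assign the virtual disk to the target so initiators can mount it.
Add-IscsiVirtualDiskTargetMapping -TargetName "Target1" -Path "C:\iSCSIVirtualDisks\LUN1.vhd"
```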
Although SMB is the default file-sharing protocol, other file-sharing solutions are available, including Network File System (NFS) and Distributed File System (DFS). To enable NFS on your file servers, you add the Server
For NFS service. This service provides a file-sharing solution for
enterprises with mixed Windows and UNIX environments. When you install
Server For NFS, users can transfer files between Windows Server and UNIX
operating systems using the NFS protocol. DFS,
on the other hand, isn’t an interoperability solution. Instead, DFS is a
robust, enterprise solution for file sharing that you can use to create
a single directory tree that includes multiple file servers and their
file shares.
The DFS tree can contain more than 5,000 shared folders in a domain
environment (or 50,000 shared folders on a standalone server), located
on different servers, enabling users to find files or folders
distributed across
the enterprise easily. DFS directory trees can also be published in the
Active Directory directory service so that they are easy to search.
DFS has two key components:
- DFS Namespaces: You can use DFS Namespaces to group shared folders located on different servers into one or more logically structured namespaces. Each namespace appears as a single shared folder with a series of subfolders. However, the underlying structure of the namespace can come from shared folders on multiple servers in different sites.
- DFS Replication: You can use DFS Replication to synchronize folders on multiple servers across local or wide area network connections using a multimaster replication engine. The replication engine uses the Remote Differential Compression (RDC) protocol to synchronize only the portions of files that have changed since the last replication.
You can use DFS Replication with DFS Namespaces or by itself. When a
domain is running at the Windows Server 2008 domain functional level or higher,
domain controllers use DFS Replication to replicate the SYSVOL
directory.
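A sketch of setting up a namespace with the DFSN cmdlets mentioned earlier; the domain, server, and share names are placeholders, and the underlying shared folders are assumed to exist:

```powershell
# Create a domain-based namespace root (placeholder names).
New-DfsnRoot -Path "\\cpandl.com\Public" -TargetPath "\\FileServer12\Public" -Type DomainV2

# Add a folder to the namespace with a target hosted on another server.
New-DfsnFolder -Path "\\cpandl.com\Public\Projects" -TargetPath "\\FileServer23\Projects"
```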
File Server Resource Manager (FSRM)
installs a suite of tools that administrators can use to better manage
data stored on servers. Using FSRM, you can do the following:
- Define file-screening policies: You use file-screening policies to block unauthorized, potentially malicious types of content. You can configure active screening, which does not allow users to save unauthorized files, or passive screening, which allows users to save unauthorized files but monitors or warns about usage (or you can configure both).
- Configure Resource Manager disk quotas: Using Resource Manager disk quotas, you can manage disk space usage by folder and by volume. You can configure quotas with a specific limit as a hard limit (meaning the limit can't be exceeded) or a soft limit (meaning the limit can be exceeded).
- Generate storage reports: You can generate storage reports as part of disk-quota and file-screening management. Storage reports identify file usage by owner, type, and other parameters. They also help identify users and applications that violate screening policies.
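As a sketch using the FileServerResourceManager cmdlets listed earlier, the following creates a quota and a passive file screen; the path, size, and file group are illustrative:

```powershell
# Create a 10-GB hard quota on a folder (placeholder path); add the
# -SoftLimit switch instead to monitor usage without enforcing it.
New-FsrmQuota -Path "D:\Shares\UserData" -Size 10GB

# Create a passive screen that monitors rather than blocks audio and
# video files, using one of the built-in file groups.
New-FsrmFileScreen -Path "D:\Shares\UserData" -IncludeGroup "Audio and Video Files" -Active:$false
```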
Booting from SANs, and using SANs with clusters
Windows Server 2012 supports booting from a SAN, having multiple
clusters attached to the same SAN, and having a mix of clusters and
standalone servers attached to the same SAN. To boot from a SAN, the
external storage devices and the host bus adapters of each server must be configured appropriately to allow booting from the SAN.
When multiple servers must boot from the same external storage
device, you must either configure the SAN in a switched environment or
directly attach each host to one of the storage
subsystem's Fibre Channel ports. A switched or direct-to-port
environment allows the servers to be separate from each other, which is essential for booting from a SAN.
Each server on the SAN must have exclusive
access to the logical disk from which it is booting, and no other
server on the SAN should be able to detect or access that logical disk.
For multiple-cluster installations, the SAN must be configured so that a
set of cluster disks is accessible only by one cluster and is
completely hidden from the rest of the clusters. By default, Windows
Server 2012 will attach and mount every logical disk that it detects
when the host bus adapter driver loads, and if multiple servers mount
the same disk, the file system can be damaged.
To prevent file system damage, the SAN must be configured in such a
way that only one server can access a particular logical disk at a time.
You can configure disks for exclusive access by using a logical
unit number (LUN) management technique such as LUN masking or LUN zoning, or a
combination of these techniques. You can use the File And
Storage Services node in the Server Manager console to manage Fibre
Channel and iSCSI SANs that support Storage Management APIs and have a
configured storage provider.
Server Message Block (SMB) is the standard technology used for file sharing. SMB
3.0 was released as part of Windows 8 and Windows Server 2012. Earlier
releases of Windows support different versions of SMB. Windows 7 and
Windows Server 2008 R2 support SMB 2.1. Windows Vista and Windows Server
2008 support SMB 2.0.
SMB 2.1 was an incremental improvement over SMB 2.0 that brought
several important changes for file sharing, including support for
BranchCache and large maximum transmission units (MTUs). SMB 3.0 has the following important improvements:
- SMB Direct: Provides support for network adapters that have Remote Direct Memory Access (RDMA) capability, allowing fast, offloaded data transfers and helping achieve high speeds and low latency while using few CPU resources. Previously, this capability was one of the key advantages of Fibre Channel block storage.
- SMB encryption: Provides secure data transfer by encrypting data automatically and without having to deploy Internet Protocol security (IPsec) or another encryption solution. SMB encryption can be enabled for an entire server (meaning for all its file shares) or for individual file shares as needed.
- SMB Multichannel: Allows servers to simultaneously use multiple connections and network interfaces, increasing fault tolerance and throughput. Configure network interface card (NIC) teaming to take advantage of this feature.
- SMB scale-out: Allows clustered file servers in an active-active configuration to aggregate bandwidth across the cluster. This provides simultaneous access to data files through all nodes in the cluster and allows administrators to load balance across cluster nodes simply by moving file server clients.
- SMB signing: Introduces AES-CCM and AES-CMAC for signing. Typically, signing with Advanced Encryption Standard (AES) is dramatically faster than signing with HMAC-SHA256 (which was used by SMB 2/SMB 2.1).
- SMB Transparent Failover: Allows administrators to perform maintenance on nodes in a clustered file server without affecting applications storing data on the server's file shares. If a failure occurs, SMB clients transparently reconnect to another cluster node. This provides the benefits of a multicontroller storage array without having to purchase one.
Note
Not only can you use the SMB Direct, SMB Multichannel, and SMB scale-out features
to implement manageable, scalable active-active file shares, you also
can use these features to take an existing Fibre Channel SAN and share
its storage over SMB 3.0. This gives you a gateway to a SAN and extends your storage options.
Keep in mind that SMB is a client/server technology. For backward
compatibility, newer clients continue to support older versions of the
technology. While establishing a connection to a file share, an SMB
client negotiates the SMB version to use for that connection based on
the highest commonly supported SMB version. This process is referred to
as dialect negotiation.
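You can see the result of dialect negotiation on a client by inspecting its active SMB connections from a PowerShell prompt:

```powershell
# Show the negotiated SMB dialect for each active connection; a Dialect
# of 3.00 means SMB 3.0 was agreed on, 2.10 means SMB 2.1, and so on.
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```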
During dialect negotiation, the version downgrade is automatic, such that an SMB
3.0 client connecting to an SMB 2.1 server will use SMB 2.1 for that
connection. Because older versions of SMB are less secure, forcing a
client to downgrade the version used is one way someone might try to
gain unauthorized access.
SMB 3.0 includes a security feature that attempts to detect forced
downgrade attempts. If such an attempt is detected, the connection is
disconnected and Event ID 1005 is logged in the
Microsoft-Windows-SmbServer/Operational log. This security feature works
only when a client tries to force a downgrade from SMB 3.0 to SMB
2.0/SMB 2.1. It doesn’t work if a client attempts to downgrade to SMB 1.0. For this reason, Microsoft recommends that you disable support for SMB 1.0.
If you want to ensure that SMB encryption is used whenever possible, you can enable SMB encryption on either a per-server or per–file
share basis. To enable encryption for an entire server and all its SMB
file shares, run the following command at an elevated PowerShell prompt
on the server:
Set-SmbServerConfiguration -EncryptData $true
To enable encryption for a specific file share rather than an entire
server, run the following command at an elevated PowerShell prompt on
the server:
Set-SmbShare -Name ShareName -EncryptData $true
Here, ShareName is the name of the share for which encryption should be used when possible, such as the following:
Set-SmbShare -Name CorpData -EncryptData $true
You can turn on encryption when you create a share as well. To do
this, run the following command at an elevated PowerShell prompt on the
server:
New-SmbShare -Name ShareName -Path PathName -EncryptData $true
Here, ShareName is the name of the share for which encryption should be used when possible and PathName is the path to an existing folder to share, such as the following:
New-SmbShare -Name CorpData -Path D:\Data -EncryptData $true
When you want to enable encryption support on multiple file servers,
you can invoke remote commands. Consider the following example:
$servers = Get-Content C:\files\server-list.txt
Invoke-Command -ComputerName $servers -ScriptBlock {Set-SmbServerConfiguration -EncryptData $true}
Here, C:\Files\Server-list.txt is the path to a text file containing a
list of the file servers to configure. In this file, each file server
should be listed on a separate line, as shown here:
FileServer12
FileServer23
FileServer45
The command will then be invoked on each of the file servers.